Results 1 - 20 of 22
1.
Elife ; 12, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38470243

ABSTRACT

Preserved communication abilities promote healthy ageing. To this end, the age-typical loss of sensory acuity might in part be compensated for by an individual's preserved attentional neural filtering. Is such a compensatory brain-behaviour link longitudinally stable? Can it predict individual change in listening behaviour? We here show that individual listening behaviour and neural filtering ability follow largely independent developmental trajectories, modelling electroencephalographic and behavioural data of N = 105 ageing individuals (39-82 y). First, despite the expected decline in hearing-threshold-derived sensory acuity, listening-task performance proved stable over 2 y. Second, neural filtering and behaviour were correlated only within each separate measurement timepoint (T1, T2). Longitudinally, however, our results raise caution on attention-guided neural filtering metrics as predictors of individual trajectories in listening behaviour: under a combination of modelling strategies, neither neural filtering at T1 nor its 2-year change could predict individual 2-year behavioural change.


Humans are social animals. Communicating with other humans is vital for our social wellbeing, and having strong connections with others has been associated with healthier aging. For most humans, speech is an integral part of communication, but speech comprehension can be challenging in everyday social settings: imagine trying to follow a conversation in a crowded restaurant or decipher an announcement in a busy train station. Noisy environments are particularly difficult to navigate for older individuals, since age-related hearing loss can impact the ability to detect and distinguish speech sounds. Some aging individuals cope better than others with this problem, but the reason why, and how listening success can change over a lifetime, is poorly understood.

One of the mechanisms involved in the segregation of speech from other sounds depends on the brain applying a 'neural filter' to auditory signals. The brain does this by aligning the activity of neurons in a part of the brain that deals with sounds, the auditory cortex, with fluctuations in the speech signal of interest. This neural 'speech tracking' can help the brain better encode the speech signals that a person is listening to.

Tune and Obleser wanted to know whether the accuracy with which individuals can implement this filtering strategy represents a marker of listening success. Further, the researchers wanted to answer whether differences in the strength of the neural filtering observed between aging listeners could predict how their listening ability would develop, and determine whether these neural changes were connected with changes in people's behaviours. To answer these questions, Tune and Obleser used data collected from a group of healthy middle-aged and older listeners twice, two years apart. They then built mathematical models using these data to investigate how differences between individuals in the brain and in behaviours relate to each other.
The researchers found that, across both timepoints, individuals with stronger neural filtering were better at distinguishing speech and listening. However, neural filtering strength measured at the first timepoint was not a good predictor of how well individuals would be able to listen two years later. Indeed, changes at the brain and the behavioural level occurred independently of each other. Tune and Obleser's findings will be relevant to neuroscientists, as well as to psychologists and audiologists whose goal is to understand differences between individuals in terms of listening success. The results suggest that neural filtering guided by attention to speech is an important readout of an individual's attention state. However, the results also caution against explaining listening performance based solely on neural factors, given that listening behaviours and neural filtering follow independent trajectories.


Subject(s)
Aging , Longevity , Adult , Humans , Brain , Auditory Perception , Benchmarking
2.
Neurosci Biobehav Rev ; 156: 105489, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38040075

ABSTRACT

Neural degeneration is a hallmark of healthy aging and can be associated with specific cognitive impairments. However, neural degeneration per se is not matched by unremitting declines in cognitive abilities. Instead, middle-aged and older adults typically maintain surprisingly high levels of cognitive functioning, suggesting that the human brain can adapt to structural degeneration by neural compensation. Here, we summarize prevailing theories and recent empirical studies on neural compensation with a focus on often neglected contributing factors, such as lifestyle, metabolism and neural plasticity. We suggest that these factors moderate the relationship between structural integrity and neural compensation, maintaining psychological well-being and behavioral functioning. Finally, we discuss that a breakdown in neural compensation may pose a tipping point that distinguishes the trajectories of healthy vs pathological aging, but conjoint support from psychology and cognitive neuroscience for this alluring view is still scarce. Therefore, future experiments that target the concomitant processes of neural compensation and associated behavior will foster a comprehensive understanding of both healthy and pathological aging.


Subject(s)
Cognitive Dysfunction , Cognitive Neuroscience , Middle Aged , Humans , Aged , Aging/psychology , Brain , Cognition
3.
J Neurosci ; 43(23): 4352-4364, 2023 06 07.
Article in English | MEDLINE | ID: mdl-37160365

ABSTRACT

Cognitive demand is thought to modulate two often used, but rarely combined, measures: pupil size and neural α (8-12 Hz) oscillatory power. However, it is unclear whether these two measures capture cognitive demand in a similar way under complex audiovisual-task conditions. Here we recorded pupil size and neural α power (using electroencephalography), while human participants of both sexes concurrently performed a visual multiple object-tracking task and an auditory gap detection task. Difficulties of the two tasks were manipulated independent of each other. Participants' performance decreased in accuracy and speed with increasing cognitive demand. Pupil size increased with increasing difficulty for both the auditory and the visual task. In contrast, α power showed diverging neural dynamics: parietal α power decreased with increasing difficulty in the visual task, but not with increasing difficulty in the auditory task. Furthermore, independent of task difficulty, within-participant trial-by-trial fluctuations in pupil size were negatively correlated with α power. Difficulty-induced changes in pupil size and α power, however, did not correlate, which is consistent with their different cognitive-demand sensitivities. Overall, the current study demonstrates that the dynamics of the neurophysiological indices of cognitive demand and associated effort are multifaceted and potentially modality-dependent under complex audiovisual-task conditions.

SIGNIFICANCE STATEMENT: Pupil size and oscillatory α power are associated with cognitive demand and effort, but their relative sensitivity under complex audiovisual-task conditions is unclear, as is the extent to which they share underlying mechanisms. Using an audiovisual dual-task paradigm, we show that pupil size increases with increasing cognitive demands for both audition and vision.
In contrast, changes in oscillatory α power depend on the respective task demands: parietal α power decreases with visual demand but not with auditory task demand. Hence, pupil size and α power show different sensitivity to cognitive demands, perhaps suggesting partly different underlying neural mechanisms.


Subject(s)
Auditory Perception , Pupil , Male , Female , Humans , Pupil/physiology , Auditory Perception/physiology , Electroencephalography , Psychomotor Performance/physiology , Cognition
4.
Sci Adv ; 7(49): eabi6070, 2021 Dec 03.
Article in English | MEDLINE | ID: mdl-34860554

ABSTRACT

How do predictions in the brain incorporate the temporal unfolding of context in our natural environment? We here provide evidence for a neural coding scheme that sparsely updates contextual representations at the boundary of events. This yields a hierarchical, multilayered organization of predictive language comprehension. Training artificial neural networks to predict the next word in a story at five stacked time scales and then using model-based functional magnetic resonance imaging, we observe an event-based "surprisal hierarchy" evolving along a temporoparietal pathway. Along this hierarchy, surprisal at any given time scale gated bottom-up and top-down connectivity to neighboring time scales. In contrast, surprisal derived from continuously updated context influenced temporoparietal activity only at short time scales. Representing context in the form of increasingly coarse events constitutes a network architecture for making predictions that is both computationally efficient and contextually diverse.

5.
Dev Cogn Neurosci ; 52: 101034, 2021 12.
Article in English | MEDLINE | ID: mdl-34781250

ABSTRACT

Humans are born into a social environment and from early on possess a range of abilities to detect and respond to social cues. In the past decade, there has been a rapidly increasing interest in investigating the neural responses underlying such early social processes under naturalistic conditions. However, the investigation of neural responses to continuous dynamic input poses the challenge of how to link neural responses back to continuous sensory input. In the present tutorial, we provide a step-by-step introduction to one approach to tackle this issue, namely the use of linear models to investigate neural tracking responses in electroencephalographic (EEG) data. While neural tracking has gained increasing popularity in adult cognitive neuroscience over the past decade, its application to infant EEG is still rare and comes with its own challenges. After introducing the concept of neural tracking, we discuss and compare the use of forward vs. backward models and individual vs. generic models using an example data set of infant EEG data. Each section comprises a theoretical introduction as well as a concrete example using MATLAB code. We argue that neural tracking provides a promising way to investigate early (social) processing in an ecologically valid setting.
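The forward-model analysis described in this tutorial abstract (often called a temporal response function, TRF) amounts to regularised least squares on a time-lagged copy of the stimulus. Below is a minimal sketch in Python with synthetic data, not the tutorial's MATLAB code; the function names (`lagged_design`, `fit_trf`), the ridge parameter, and the toy kernel are all illustrative assumptions.

```python
import numpy as np

def lagged_design(stimulus, lags):
    """Time-lagged design matrix: column j holds the stimulus shifted by lags[j] samples."""
    n = len(stimulus)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n - lag]
        else:
            X[:lag, j] = stimulus[-lag:]
    return X

def fit_trf(stimulus, eeg, lags, alpha=1.0):
    """Forward model (TRF) via ridge regression.

    stimulus: (n_samples,) envelope; eeg: (n_samples, n_channels).
    Returns weights of shape (n_lags, n_channels): the estimated neural
    response to a unit impulse in the stimulus at each lag.
    """
    X = lagged_design(stimulus, lags)
    XtX = X.T @ X + alpha * np.eye(X.shape[1])  # ridge-regularised normal equations
    return np.linalg.solve(XtX, X.T @ eeg)

# Synthetic demo: one "EEG channel" generated by convolving a random
# stimulus envelope with a known kernel, plus noise.
rng = np.random.default_rng(0)
n = 2000
stim = rng.standard_normal(n)
true_kernel = np.array([0.0, 0.5, 1.0, 0.5, 0.0])  # response at lags 0..4
eeg = np.convolve(stim, true_kernel)[:n, None] + 0.1 * rng.standard_normal((n, 1))
weights = fit_trf(stim, eeg, lags=np.arange(5), alpha=0.1)
```

A backward (decoding) model, as contrasted with the forward model in the abstract, uses the same machinery with the roles swapped: multichannel EEG forms the lagged design matrix and the stimulus envelope is the target.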


Subject(s)
Child Development , Electroencephalography , Social Behavior , Humans , Infant
7.
PLoS Biol ; 19(10): e3001410, 2021 10.
Article in English | MEDLINE | ID: mdl-34634031

ABSTRACT

In multi-talker situations, individuals adapt behaviorally to this listening challenge mostly with ease, but how do brain neural networks shape this adaptation? We here establish a long-sought link between large-scale neural communications in electrophysiology and behavioral success in the control of attention in difficult listening situations. In an age-varying sample of N = 154 individuals, we find that connectivity between intrinsic neural oscillations extracted from source-reconstructed electroencephalography is regulated according to the listener's goal during a challenging dual-talker task. These dynamics occur as spatially organized modulations in power-envelope correlations of alpha and low-beta neural oscillations during approximately 2-s intervals most critical for listening behavior relative to resting-state baseline. First, left frontoparietal low-beta connectivity (16 to 24 Hz) increased during anticipation and processing of a spatial-attention cue before speech presentation. Second, posterior alpha connectivity (7 to 11 Hz) decreased during comprehension of competing speech, particularly around target-word presentation. Connectivity dynamics of these networks were predictive of individual differences in the speed and accuracy of target-word identification, respectively, but proved unconfounded by changes in neural oscillatory activity strength. Successful adaptation to a listening challenge thus latches onto two distinct yet complementary neural systems: a beta-tuned frontoparietal network enabling the flexible adaptation to attentive listening state and an alpha-tuned posterior network supporting attention to speech.


Subject(s)
Auditory Perception/physiology , Cerebral Cortex/physiology , Nerve Net/physiology , Adult , Aged , Aged, 80 and over , Alpha Rhythm/physiology , Behavior , Beta Rhythm/physiology , Brain Mapping , Electroencephalography , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Rest/physiology , Task Performance and Analysis
8.
Nat Commun ; 12(1): 4533, 2021 07 26.
Article in English | MEDLINE | ID: mdl-34312388

ABSTRACT

Successful listening crucially depends on intact attentional filters that separate relevant from irrelevant information. Research into their neurobiological implementation has focused on two potential auditory filter strategies: the lateralization of alpha power and selective neural speech tracking. However, the functional interplay of the two neural filter strategies and their potency to index listening success in an ageing population remains unclear. Using electroencephalography and a dual-talker task in a representative sample of listeners (N = 155; age=39-80 years), we here demonstrate an often-missed link from single-trial behavioural outcomes back to trial-by-trial changes in neural attentional filtering. First, we observe preserved attentional-cue-driven modulation of both neural filters across chronological age and hearing levels. Second, neural filter states vary independently of one another, demonstrating complementary neurobiological solutions of spatial selective attention. Stronger neural speech tracking but not alpha lateralization boosts trial-to-trial behavioural performance. Our results highlight the translational potential of neural speech tracking as an individualized neural marker of adaptive listening behaviour.


Subject(s)
Aging/physiology , Attention/physiology , Auditory Perception/physiology , Hearing Loss/physiopathology , Hearing/physiology , Neural Pathways/physiology , Acoustic Stimulation/methods , Adult , Aged , Aged, 80 and over , Algorithms , Electroencephalography/methods , Female , Hearing Loss/diagnosis , Humans , Male , Middle Aged , Models, Neurological , Speech Perception/physiology
9.
Trends Hear ; 25: 23312165211013242, 2021.
Article in English | MEDLINE | ID: mdl-34184964

ABSTRACT

Hearing loss is often asymmetric such that hearing thresholds differ substantially between the two ears. The extreme case of such asymmetric hearing is single-sided deafness. A unilateral cochlear implant (CI) on the more severely impaired ear is an effective treatment to restore hearing. The interactive effects of unilateral acoustic degradation and spatial attention to one sound source in multitalker situations are at present unclear. Here, we simulated some features of listening with a unilateral CI in young, normal-hearing listeners (N = 22) who were presented with 8-band noise-vocoded speech to one ear and intact speech to the other ear. Neural responses were recorded in the electroencephalogram to obtain the spectrotemporal response function to speech. Listeners made more mistakes when answering questions about vocoded (vs. intact) attended speech. At the neural level, we asked how unilateral acoustic degradation would impact the attention-induced amplification of tracking target versus distracting speech. Interestingly, unilateral degradation did not per se reduce the attention-induced amplification but instead delayed it in time: Speech encoding accuracy, modelled on the basis of the spectrotemporal response function, was significantly enhanced for attended versus ignored intact speech at earlier neural response latencies (<∼250 ms). This attentional enhancement was not absent but delayed for vocoded speech. These findings suggest that attentional selection of unilateral, degraded speech is feasible but induces delayed neural separation of competing speech, which might explain listening challenges experienced by unilateral CI users.


Subject(s)
Cochlear Implantation , Cochlear Implants , Speech Perception , Acoustic Stimulation , Acoustics , Humans , Speech
10.
iScience ; 24(4): 102345, 2021 Apr 23.
Article in English | MEDLINE | ID: mdl-33870139

ABSTRACT

Slow neurobiological rhythms, such as the circadian secretion of glucocorticoid (GC) hormones, modulate a variety of body functions. Whether and how endocrine fluctuations also exert an influence on perceptual abilities is largely uncharted. Here, we show that phasic increases in GC availability prove beneficial to auditory discrimination. In an age-varying sample of N = 68 healthy human participants, we characterize the covariation of saliva cortisol with perceptual sensitivity in an auditory pitch discrimination task at five time points across the sleep-wake cycle. First, momentary saliva cortisol levels were captured well by the time relative to wake-up and overall sleep duration. Second, within individuals, higher cortisol levels just prior to behavioral testing predicted better pitch discrimination ability, expressed as a steepened psychometric curve. This effect of GCs held under a set of statistical controls. Our results pave the way for more in-depth studies on neuroendocrinological determinants of sensory encoding and perception.

11.
Elife ; 8, 2019 12 10.
Article in English | MEDLINE | ID: mdl-31820732

ABSTRACT

Instantaneous brain states have consequences for our sensation, perception, and behaviour. Fluctuations in arousal and neural desynchronization likely constitute perceptually relevant states. However, their relationship and their relative impact on perception is unclear. We here show that, at the single-trial level in humans, local desynchronization in sensory cortex (expressed as time-series entropy) versus pupil-linked arousal differentially impact perceptual processing. While we recorded electroencephalography (EEG) and pupillometry data, stimuli of a demanding auditory discrimination task were presented into states of high or low desynchronization of auditory cortex via a real-time closed-loop setup. Desynchronization and arousal distinctly influenced stimulus-evoked activity and shaped behaviour, displaying an inverted U-shaped relationship: states of intermediate desynchronization elicited minimal response bias and fastest responses, while states of intermediate arousal gave rise to highest response sensitivity. Our results speak to a model in which independent states of local desynchronization and global arousal jointly optimise sensory processing and performance.


Subject(s)
Arousal , Auditory Cortex/physiology , Auditory Perception , Brain/physiology , Cortical Synchronization , Pupil/physiology , Acoustic Stimulation , Adult , Electroencephalography , Female , Humans , Male , Young Adult
12.
Atten Percept Psychophys ; 81(4): 1108-1118, 2019 May.
Article in English | MEDLINE | ID: mdl-30993655

ABSTRACT

When one is listening, familiarity with an attended talker's voice improves speech comprehension. Here, we instead investigated the effect of familiarity with a distracting talker. In an irrelevant-speech task, we assessed listeners' working memory for the serial order of spoken digits when a task-irrelevant, distracting sentence was produced by either a familiar or an unfamiliar talker (with rare omissions of the task-irrelevant sentence). We tested two groups of listeners using the same experimental procedure. The first group were undergraduate psychology students (N = 66) who had attended an introductory statistics course. Critically, each student had been taught by one of two course instructors, whose voices served as the familiar and unfamiliar task-irrelevant talkers. The second group of listeners were family members and friends (N = 20) who had known either one of the two talkers for more than 10 years. Students, but not family members and friends, made more errors when the task-irrelevant talker was familiar versus unfamiliar. Interestingly, the effect of talker familiarity was not modulated by the presence of task-irrelevant speech: Students experienced stronger working memory disruption by a familiar talker, irrespective of whether they heard a task-irrelevant sentence during memory retention or merely expected it. While previous work has shown that familiarity with an attended talker benefits speech comprehension, our findings indicate that familiarity with an ignored talker disrupts working memory for target speech. The absence of this effect in family members and friends suggests that the degree of familiarity modulates the memory disruption.


Subject(s)
Acoustic Stimulation/psychology , Memory, Short-Term/physiology , Recognition, Psychology/physiology , Speech Perception/physiology , Task Performance and Analysis , Adolescent , Adult , Aged , Comprehension , Female , Hearing , Humans , Language , Male , Middle Aged , Voice , Young Adult
13.
Proc Natl Acad Sci U S A ; 116(2): 660-669, 2019 01 08.
Article in English | MEDLINE | ID: mdl-30587584

ABSTRACT

Speech comprehension in noisy, multitalker situations poses a challenge. Successful behavioral adaptation to a listening challenge often requires stronger engagement of auditory spatial attention and context-dependent semantic predictions. Human listeners differ substantially in the degree to which they adapt behaviorally and can listen successfully under such circumstances. How cortical networks embody this adaptation, particularly at the individual level, is currently unknown. We here explain this adaptation from reconfiguration of brain networks for a challenging listening task (i.e., a linguistic variant of the Posner paradigm with concurrent speech) in an age-varying sample of n = 49 healthy adults undergoing resting-state and task fMRI. We here provide evidence for the hypothesis that more successful listeners exhibit stronger task-specific reconfiguration (hence, better adaptation) of brain networks. From rest to task, brain networks become reconfigured toward more localized cortical processing characterized by higher topological segregation. This reconfiguration is dominated by the functional division of an auditory and a cingulo-opercular module and the emergence of a conjoined auditory and ventral attention module along bilateral middle and posterior temporal cortices. Supporting our hypothesis, the degree to which modularity of this frontotemporal auditory control network is increased relative to resting state predicts individuals' listening success in states of divided and selective attention. Our findings elucidate how fine-tuned cortical communication dynamics shape selection and comprehension of speech. Our results highlight modularity of the auditory control network as a key organizational principle in cortical implementation of auditory spatial attention in challenging listening situations.


Subject(s)
Auditory Cortex/physiology , Nerve Net/physiology , Sound Localization/physiology , Speech Perception/physiology , Adult , Aged , Auditory Cortex/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Nerve Net/diagnostic imaging
14.
Eur J Neurosci ; 48(7): 2537-2550, 2018 10.
Article in English | MEDLINE | ID: mdl-29430736

ABSTRACT

In recent years, hemispheric lateralisation of alpha power has emerged as a neural mechanism thought to underpin spatial attention across sensory modalities. Yet, how healthy ageing, beginning in middle adulthood, impacts the modulation of lateralised alpha power supporting auditory attention remains poorly understood. In the current electroencephalography study, middle-aged and older adults (N = 29; ~40-70 years) performed a dichotic listening task that simulates a challenging, multitalker scenario. We examined the extent to which the modulation of 8-12 Hz alpha power would serve as neural marker of listening success across age. Given the increase in interindividual variability with age, we examined an extensive battery of behavioural, perceptual and neural measures. Similar to findings on younger adults, middle-aged and older listeners' auditory spatial attention induced robust lateralisation of alpha power, which synchronised with the speech rate. Notably, the observed relationship between this alpha lateralisation and task performance did not co-vary with age. Instead, task performance was strongly related to an individual's attentional and working memory capacity. Multivariate analyses revealed a separation of neural and behavioural variables independent of age. Our results suggest that in age-varying samples such as the present one, the lateralisation of alpha power is neither a sufficient nor necessary neural strategy for an individual's auditory spatial attention, as higher age might come with increased use of alternative, compensatory mechanisms. Our findings emphasise that explaining interindividual variability will be key to understanding the role of alpha oscillations in auditory attention in the ageing listener.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Memory, Short-Term/physiology , Speech/physiology , Acoustic Stimulation/methods , Adult , Age Factors , Aged , Electroencephalography/methods , Female , Humans , Male , Middle Aged , Perceptual Masking/physiology , Speech Perception/physiology
15.
Cell Mol Life Sci ; 74(2): 339-358, 2017 01.
Article in English | MEDLINE | ID: mdl-27554772

ABSTRACT

Amyotrophic lateral sclerosis (ALS) is a fatal motor neuron disease. Neuronal vacuolization and glial activation are pathologic hallmarks in the superoxide dismutase 1 (SOD1) mouse model of ALS. Previously, we found the neuropeptide calcitonin gene-related peptide (CGRP) associated with vacuolization and astrogliosis in the spinal cord of these mice. We now show that CGRP abundance positively correlated with the severity of astrogliosis, but not vacuolization, in several motor and non-motor areas throughout the brain. SOD1 mice harboring a genetic depletion of the ßCGRP isoform showed reduced CGRP immunoreactivity associated with vacuolization, while motor functions, body weight, survival, and astrogliosis were not altered. When CGRP signaling was completely disrupted through genetic depletion of the CGRP receptor component, receptor activity-modifying protein 1 (RAMP1), hind limb muscle denervation, and loss of muscle performance were accelerated, while body weight and survival were not affected. Dampened neuroinflammation, i.e., reduced levels of astrogliosis in the brain stem already in the pre-symptomatic disease stage, and reduced microgliosis and lymphocyte infiltrations during the late disease phase were additional neuropathology features in these mice. On the molecular level, mRNA expression levels of brain-derived neurotrophic factor (BDNF) and those of the anti-inflammatory cytokine interleukin 6 (IL-6) were elevated, while those of several pro-inflammatory cytokines were found reduced in the brain stem of RAMP1-deficient SOD1 mice at disease end stage. Our results thus identify an important, possibly dual role of CGRP in ALS pathogenesis.


Subject(s)
Brain/pathology , Calcitonin Gene-Related Peptide/metabolism , Inflammation/metabolism , Inflammation/pathology , Muscle Denervation , Signal Transduction , Superoxide Dismutase-1/genetics , Animals , Astrocytes/metabolism , Astrocytes/pathology , Brain/metabolism , Cell Death , Chemokines/metabolism , Disease Progression , Gene Expression Regulation , Humans , Hybridization, Genetic , Lymphocytes/pathology , Mice, Mutant Strains , Mice, Transgenic , Models, Biological , Motor Neurons/metabolism , Motor Neurons/pathology , Nerve Growth Factors/metabolism , Receptor Activity-Modifying Protein 1/deficiency , Receptor Activity-Modifying Protein 1/metabolism , Superoxide Dismutase-1/metabolism , Vacuoles/metabolism
16.
J Neurosci ; 36(48): 12180-12191, 2016 11 30.
Article in English | MEDLINE | ID: mdl-27903727

ABSTRACT

The hierarchical organization of human cortical circuits integrates information across different timescales via temporal receptive windows, which increase in length from lower to higher levels of the cortical hierarchy (Hasson et al., 2015). A recent neurobiological model of higher-order language processing (Bornkessel-Schlesewsky et al., 2015) posits that temporal receptive windows in the dorsal auditory stream provide the basis for a hierarchically organized predictive coding architecture (Friston and Kiebel, 2009). In this stream, a nested set of internal models generates time-based ("when") predictions for upcoming input at different linguistic levels (sounds, words, sentences, discourse). Here, we used naturalistic stories to test the hypothesis that multi-sentence, discourse-level predictions are processed in the dorsal auditory stream, yielding attenuated BOLD responses for highly predicted versus less strongly predicted language input. The results were as hypothesized: discourse-related cues, such as passive voice, which effect a higher predictability of remention for a character at a later point within a story, led to attenuated BOLD responses for auditory input of high versus low predictability within the dorsal auditory stream, specifically in the inferior parietal lobule, middle frontal gyrus, and dorsal parts of the inferior frontal gyrus, among other areas. Additionally, we found effects of content-related ("what") predictions in ventral regions. These findings provide novel evidence that hierarchical predictive coding extends to discourse-level processing in natural language. Importantly, they ground language processing on a hierarchically organized predictive network, as a common underlying neurobiological basis shared with other brain functions. SIGNIFICANCE STATEMENT: Language is the most powerful communicative medium available to humans. 
Nevertheless, we lack an understanding of the neurobiological basis of language processing in natural contexts: it is not clear how the human brain processes linguistic input within the rich contextual environments of our everyday language experience. This fMRI study provides the first demonstration that, in natural stories, predictions concerning the probability of remention of a protagonist at a later point are processed in the dorsal auditory stream. Results are congruent with a hierarchical predictive coding architecture assuming temporal receptive windows of increasing length from auditory to higher-order cortices. Accordingly, language processing in rich contextual settings can be explained via domain-general, neurobiological mechanisms of information processing in the human brain.


Subject(s)
Attention/physiology , Auditory Cortex/physiology , Comprehension/physiology , Cues , Nerve Net/physiology , Speech Perception/physiology , Adult , Auditory Pathways/physiology , Brain Mapping , Communication , Decision Making/physiology , Female , Humans , Male , Narration , Pattern Recognition, Physiological/physiology , Time Factors
17.
Neuroimage ; 136: 10-25, 2016 Aug 01.
Article in English | MEDLINE | ID: mdl-27177762

ABSTRACT

Human language allows us to express our thoughts and ideas by combining entities, concepts and actions into multi-event episodes. Yet, the functional neuroanatomy engaged in interpretation of such high-level linguistic input remains poorly understood. Here, we used easy to detect and more subtle "borderline" anomalies to investigate the brain regions and mechanistic principles involved in the use of real-world event knowledge in language comprehension. Overall, the results showed that the processing of sentences in context engages a complex set of bilateral brain regions in the frontal, temporal and inferior parietal lobes. Easy anomalies preferentially engaged lower-order cortical areas adjacent to the primary auditory cortex. In addition, the left supramarginal gyrus and anterior temporal sulcus as well as the right posterior middle temporal gyrus contributed to the processing of easy and borderline anomalies. The observed pattern of results is explained in terms of (i) hierarchical processing along a dorsal-ventral axis and (ii) the assumption of high-order association areas serving as cortical hubs in the convergence of information in a distributed network. Finally, the observed modulation of BOLD signal in prefrontal areas provides support for their role in the implementation of executive control processes.


Subject(s)
Auditory Cortex/physiology , Comprehension/physiology , Concept Formation/physiology , Language , Semantics , Speech Perception/physiology , Adult , Brain Mapping , Female , Humans , Male , Nerve Net/physiology
19.
Hum Brain Mapp ; 36(11): 4231-46, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26356583

ABSTRACT

The neural correlates of theory of mind (ToM) are typically studied using paradigms which require participants to draw explicit, task-related inferences (e.g., in the false belief task). In a natural setup, such as listening to stories, false belief mentalizing occurs incidentally as part of narrative processing. In our experiment, participants listened to auditorily presented stories with false belief passages (implicit false belief processing) and immediately after each story answered comprehension questions (explicit false belief processing), while neural responses were measured with functional magnetic resonance imaging (fMRI). All stories included (among other situations) one false belief condition and one closely matched control condition. For the implicit ToM processing, we modeled the hemodynamic response during the false belief passages in the story and compared it to the hemodynamic response during the closely matched control passages. For implicit mentalizing, we found activation in typical ToM processing regions, that is, the angular gyrus (AG), superior medial frontal gyrus (SmFG), precuneus (PCUN) and middle temporal gyrus (MTG), as well as in the inferior frontal gyrus (IFG) bilaterally. For explicit ToM, we only found AG activation. The conjunction analysis highlighted the left AG and MTG as well as the bilateral IFG as overlapping ToM processing regions for both implicit and explicit modes. Implicit ToM processing during listening to false belief passages recruits the left SmFG and bilateral PCUN in addition to the "mentalizing network" known from explicit processing tasks.


Subject(s)
Brain Mapping/methods , Cerebral Cortex/physiology , Comprehension/physiology , Theory of Mind/physiology , Adult , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
20.
Neuropsychologia ; 56: 147-66, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24447768

ABSTRACT

The N400 event-related brain potential (ERP) has played a major role in the examination of how the human brain processes meaning. For current theories of the N400, classes of semantic inconsistencies which do not elicit N400 effects have proven particularly influential. Semantic anomalies that are difficult to detect are a case in point ("borderline anomalies", e.g. "After an air crash, where should the survivors be buried?"), engendering a late positive ERP response but no N400 effect in English (Sanford, Leuthold, Bohan, & Sanford, 2011). In three auditory ERP experiments, we demonstrate that this result is subject to cross-linguistic variation. In a German version of Sanford and colleagues' experiment (Experiment 1), detected borderline anomalies elicited both N400 and late positivity effects compared to control stimuli or to missed borderline anomalies. Classic easy-to-detect semantic (non-borderline) anomalies showed the same pattern as in English (N400 plus late positivity). The cross-linguistic difference in the response to borderline anomalies was replicated in two additional studies with a slightly modified task (Experiment 2a: German; Experiment 2b: English), with a reliable LANGUAGE×ANOMALY interaction for the borderline anomalies confirming that the N400 effect is subject to systematic cross-linguistic variation. We argue that this variation results from differences in the language-specific default weighting of top-down and bottom-up information, concluding that N400 amplitude reflects the interaction between the two information sources in the form-to-meaning mapping.


Subject(s)
Awareness/physiology , Brain Mapping , Evoked Potentials/physiology , Linguistics , Semantics , Acoustic Stimulation , Adolescent , Adult , Electroencephalography , Female , Humans , Male , Surveys and Questionnaires , Translating , Young Adult